This script takes a deep dive into the Landsat 7 labels for a more rigorous analysis of inconsistent band data and outliers in the filtered label dataset. Here we determine whether any more label data points should be removed from the training dataset, and whether we can glean anything from the metadata in the outlier dataset that would let us pre-emptively toss out scenes when we apply the classification algorithm.
harmonize_version = "v2024-04-17"
outlier_version = "v2024-04-17"
LS7 <- read_rds(paste0("data/labels/harmonized_LS57_labels_", harmonize_version, ".RDS")) %>%
  filter(mission == "LANDSAT_7")
Let’s first look at the data to check for consistency between the user-pulled data and our re-pull; here, the user data are in “BX” format and the re-pull is in “SR_BX” format. These steps assure data quality in case a volunteer didn’t follow the directions explicitly.
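The helper make_band_comp_plot and the band-name vectors LS57_user and LS57_ee are defined elsewhere in the repository; below is a minimal sketch of what the helper plausibly does (an illustration assuming ggplot2, not the project’s actual definition):

library(ggplot2)

# sketch of the comparison helper: scatter the user-pulled band against the
# re-pulled band with a 1:1 line, so mismatches fall off the diagonal
make_band_comp_plot <- function(user_band, ee_band, data, mission) {
  ggplot(data, aes(x = .data[[user_band]], y = .data[[ee_band]])) +
    geom_point(alpha = 0.3) +
    geom_abline(slope = 1, intercept = 0, color = "red") +
    labs(title = paste(mission, user_band, "vs", ee_band),
         x = user_band, y = ee_band) +
    theme_bw()
}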
pmap(.l = list(user_band = LS57_user,
               ee_band = LS57_ee,
               data = list(LS7),
               mission = list("LANDSAT_7")),
     .f = make_band_comp_plot)
(Output: six band comparison plots, one per band.)
There is some mismatch here; let’s look at those data. In this case, we’ll just use B7/SR_B7 as the reference for filtering inconsistent labels.
LS7_inconsistent <- LS7 %>%
  filter((is.na(SR_B7) | SR_B7 != B7))
LS7_inconsistent %>%
  group_by(class) %>%
  summarise(n_labels = n()) %>%
  kable()
| class | n_labels |
|---|---|
| cloud | 188 |
| darkNearShoreSediment | 1 |
| lightNearShoreSediment | 4 |
| offShoreSediment | 3 |
| openWater | 1 |
| other | 2 |
| shorelineContamination | 5 |
Most of these are cloud labels, where the pixel is saturated and then masked in the re-pull (resulting in an NA). Let’s drop those from this subset and look further.
LS7_inconsistent <- LS7_inconsistent %>%
  filter(!(class == "cloud" & is.na(SR_B7)))
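As a quick check on scale (a sketch; LS7 is the full harmonized Landsat 7 label set read in above):

# share of labels that remain inconsistent after dropping saturated clouds
nrow(LS7_inconsistent) / nrow(LS7) * 100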
This leaves 0.8% of the Landsat 7 labels as inconsistent. Let’s do a quick sanity check to make sure that we’ve dropped values that are inconsistent between pulls:
LS7_filtered <- LS7 %>%
  filter(# keep rows where SR_B7 has data and where the values match between
         # the two pulls,
         (!is.na(SR_B7) & SR_B7 == B7) |
           # or where the user-specified class is cloud and the pixel was
           # saturated, providing no surface reflectance data;
           (class == "cloud" & is.na(SR_B7)),
         # and drop rows where any re-pulled band value is greater than 1,
         # which isn't a valid surface reflectance value
         if_all(LS57_ee,
                ~ . <= 1))
And plot:
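The plotting call isn’t echoed in this render; presumably it re-runs the same comparison helper on the filtered labels, along these lines:

pmap(.l = list(user_band = LS57_user,
               ee_band = LS57_ee,
               data = list(LS7_filtered),
               mission = list("LANDSAT_7")),
     .f = make_band_comp_plot)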
(Output: six band comparison plots on the filtered data.)
There are still a few mismatched values visible in B5/SR_B5.
LS7_inconsistent <- LS7_filtered %>%
  filter(B5 != SR_B5) %>%
  bind_rows(., LS7_inconsistent)
LS7_filtered <- LS7_filtered %>%
  filter(B5 == SR_B5)
And now let’s look at the data by class:
(Output: six plots of band values by class, one per band.)
We aren’t actually modeling “other” (insufficient observations to classify) or “shorelineContamination” (we’ll use the latter later to block areas where there is likely shoreline contamination in the AOI), so let’s drop those categories and look at the data again.
LS7_for_class_analysis <- LS7_filtered %>%
  filter(!(class %in% c("other", "shorelineContamination")))
(Output: six plots of band values by class, one per band, with the dropped categories removed.)
Let’s also go back and check to see if there is any pattern to the inconsistent labels.
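A sketch of that check:

# inconsistent labels per contributor, and how many scene dates they span
LS7_inconsistent %>%
  group_by(vol_init) %>%
  summarise(n_tot_labs = n(),
            n_dates = n_distinct(date)) %>%
  arrange(-n_tot_labs) %>%
  kable()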
| vol_init | n_tot_labs | n_dates |
|---|---|---|
| BGS | 5 | 4 |
| LAE | 5 | 3 |
| LRCP | 4 | 2 |
| SKS | 3 | 2 |
| ANK | 1 | 1 |
| FYC | 1 | 1 |
There seem to be just a few inconsistencies per contributor, spread across multiple dates. These could simply be processing differences (if a scene happened to be updated since users pulled these data, or if the labels fell on an overlapping portion of two scenes). I’m not concerned about any systemic errors here that would require modified data handling for a specific scene or contributor.
There are statistical outliers within this dataset, and they may impact the interpretation of any statistical testing we do. Let’s see if we can narrow down when those outliers occur and/or glean anything from the outlier data that may be applicable to the application of the algorithm. Outliers may be a systemic issue (the scene itself is an outlier), a user issue (a contributor may have been a bad actor), or they may just be real. This section asks those questions. The “true outliers” that we dismiss from the dataset will also be used to aid interpretation and application of the algorithm across the Landsat stack, so it is important to note any patterns we see in the outlier dataset.
## [1] "Classes represented in outliers:"
## [1] "darkNearShoreSediment" "lightNearShoreSediment" "offShoreSediment"
## [4] "openWater"
Okay, 90 outliers (>1.5×IQR) out of 1640, and they are all from non-cloud classes.
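The outliers object is built upstream of the chunks echoed here; a minimal sketch of an equivalent 1.5×IQR flag, assuming thresholds are computed per band within each class (LS57_ee is the vector of re-pulled band names):

# flag a label as an outlier if any band value falls outside
# [Q1 - 1.5*IQR, Q3 + 1.5*IQR] for its class
outliers_sketch <- LS7_for_class_analysis %>%
  filter(class != "cloud") %>%
  group_by(class) %>%
  mutate(across(all_of(LS57_ee),
                ~ . < quantile(., 0.25, na.rm = TRUE) - 1.5 * IQR(., na.rm = TRUE) |
                  . > quantile(., 0.75, na.rm = TRUE) + 1.5 * IQR(., na.rm = TRUE),
                .names = "out_{.col}")) %>%
  ungroup() %>%
  filter(if_any(starts_with("out_")))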
Are there any contributors that show up more than others in the outliers dataset?
LS7_vol <- LS7_for_class_analysis %>%
  filter(class != "cloud") %>%
  group_by(vol_init) %>%
  summarise(n_tot = n()) %>%
  arrange(-n_tot)
LS7_out_vol <- outliers %>%
  group_by(vol_init) %>%
  summarise(n_out = n()) %>%
  arrange(-n_out)
full_join(LS7_vol, LS7_out_vol) %>%
  mutate(percent_outlier = n_out/n_tot*100) %>%
  arrange(-percent_outlier) %>%
  kable()
| vol_init | n_tot | n_out | percent_outlier |
|---|---|---|---|
| SKS | 309 | 32 | 10.355987 |
| BGS | 352 | 33 | 9.375000 |
| LRCP | 272 | 22 | 8.088235 |
| LAE | 47 | 2 | 4.255319 |
| HAD | 65 | 1 | 1.538461 |
| AMP | 54 | NA | NA |
| FYC | 51 | NA | NA |
| ANK | 44 | NA | NA |
These are along the same lines as the LS5 data: at or below 10%, and generally, the more labels, the more outliers.
How many of these outliers are in specific scenes?
LS7_out_date <- outliers %>%
  group_by(date, vol_init) %>%
  summarize(n_out = n())
LS7_date <- LS7_for_class_analysis %>%
  filter(class != "cloud") %>%
  group_by(date, vol_init) %>%
  summarise(n_tot = n())
LS7_out_date <- left_join(LS7_out_date, LS7_date) %>%
  mutate(percent_outlier = n_out/n_tot*100) %>%
  arrange(-percent_outlier)
LS7_out_date %>%
  kable()
| date | vol_init | n_out | n_tot | percent_outlier |
|---|---|---|---|---|
| 2007-07-15 | SKS | 26 | 84 | 30.952381 |
| 2015-07-05 | BGS | 11 | 40 | 27.500000 |
| 2015-10-09 | BGS | 9 | 39 | 23.076923 |
| 2013-08-16 | LRCP | 12 | 108 | 11.111111 |
| 2016-11-12 | LAE | 2 | 18 | 11.111111 |
| 2020-08-03 | BGS | 7 | 71 | 9.859155 |
| 2004-06-20 | LRCP | 6 | 67 | 8.955224 |
| 2006-09-14 | BGS | 5 | 77 | 6.493506 |
| 2005-09-27 | SKS | 5 | 82 | 6.097561 |
| 2003-05-01 | LRCP | 4 | 97 | 4.123711 |
| 1999-10-29 | HAD | 1 | 65 | 1.538461 |
| 2011-04-05 | BGS | 1 | 70 | 1.428571 |
| 2017-07-26 | SKS | 1 | 82 | 1.219512 |
There are three scenes here with very high outlier percentages. Perhaps there is something about the atmospheric correction (AC) inputs in these particular scenes? Or the general scene quality?
LS7_out_date %>%
  filter(percent_outlier > 20) %>%
  select(date, vol_init) %>%
  left_join(., LS7) %>%
  select(date, vol_init, CLOUD_COVER:DATA_SOURCE_WATER_VAPOR) %>%
  distinct() %>%
  kable()
| date | vol_init | CLOUD_COVER | IMAGE_QUALITY | DATA_SOURCE_AIR_TEMPERATURE | DATA_SOURCE_ELEVATION | DATA_SOURCE_OZONE | DATA_SOURCE_PRESSURE | DATA_SOURCE_REANALYSIS | DATA_SOURCE_WATER_VAPOR |
|---|---|---|---|---|---|---|---|---|---|
| 2007-07-15 | SKS | 38.0, 38.0, 40.5, 43.0, 43.0 | 9, 9, 9, 9, 9 | NCEP | GLS2000 | TOMS | NCEP | GEOS-5 FP-IT | NCEP |
| 2015-07-05 | BGS | 33.0, 33.0, 44.5, 56.0, 56.0 | 9, 9, 9, 9, 9 | NCEP | GLS2000 | TOMS | NCEP | GEOS-5 FP-IT | NCEP |
| 2015-10-09 | BGS | 58.0, 58.0, 63.5, 69.0, 69.0 | 9, 9, 9, 9, 9 | NCEP | GLS2000 | TOMS | NCEP | GEOS-5 FP-IT | NCEP |
Image quality is high across the board and cloud cover is consistently above 30%, but there is nothing egregious or obvious here.
How many bands are represented in each labeled point that is an outlier? If there are outliers amongst the RGB bands (the bands users saw when labeling), there is probably a systemic problem. If the outliers are in singular bands, especially those outside the visible spectrum, we can dismiss the individual observations and probably assert that the scene as a whole is okay to use in training. As a first pass, let’s look at the labels where three or more bands were deemed outliers:
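A sketch of how this table might be assembled, reusing the hypothetical per-band out_* flags from the outlier sketch above:

# per-label count of outlier bands, plus which bands they were
band_out_sketch <- outliers_sketch %>%
  rowwise() %>%
  mutate(n_bands_out = sum(c_across(starts_with("out_"))),
         bands_out = paste(LS57_ee[c_across(starts_with("out_"))], collapse = "; ")) %>%
  ungroup() %>%
  filter(n_bands_out >= 3) %>%
  select(date, class, vol_init, user_label_id, n_bands_out, bands_out) %>%
  arrange(-n_bands_out)
band_out_sketch %>%
  kable()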
| date | class | vol_init | user_label_id | n_bands_out | bands_out |
|---|---|---|---|---|---|
| 2020-08-03 | openWater | BGS | 378 | 4 | SR_B1; SR_B2; SR_B3; SR_B4 |
| 2003-05-01 | darkNearShoreSediment | LRCP | 1456 | 3 | SR_B4; SR_B5; SR_B7 |
| 2007-07-15 | lightNearShoreSediment | SKS | 1196 | 3 | SR_B4; SR_B5; SR_B7 |
| 2007-07-15 | lightNearShoreSediment | SKS | 1197 | 3 | SR_B4; SR_B5; SR_B7 |
| 2007-07-15 | lightNearShoreSediment | SKS | 1220 | 3 | SR_B4; SR_B5; SR_B7 |
| 2015-10-09 | openWater | BGS | 1977 | 3 | SR_B1; SR_B2; SR_B3 |
| 2015-10-09 | openWater | BGS | 1978 | 3 | SR_B1; SR_B2; SR_B3 |
| 2015-10-09 | openWater | BGS | 1981 | 3 | SR_B1; SR_B2; SR_B3 |
| 2015-10-09 | openWater | BGS | 1982 | 3 | SR_B1; SR_B2; SR_B3 |
| 2015-10-09 | openWater | BGS | 1983 | 3 | SR_B1; SR_B2; SR_B3 |
| 2016-11-12 | offShoreSediment | LAE | 796 | 3 | SR_B4; SR_B5; SR_B7 |
| 2020-08-03 | openWater | BGS | 375 | 3 | SR_B1; SR_B2; SR_B3 |
| 2020-08-03 | openWater | BGS | 376 | 3 | SR_B1; SR_B2; SR_B3 |
| 2020-08-03 | openWater | BGS | 377 | 3 | SR_B1; SR_B2; SR_B3 |
Let’s group by image date and volunteer and tally up the number of labels where at least 3 bands were outliers:
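Continuing the sketch:

# scenes (date x contributor) with labels where >= 3 bands were outliers
band_out_sketch %>%
  group_by(date, vol_init) %>%
  summarise(n_labels = n()) %>%
  arrange(-n_labels) %>%
  kable()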
| date | vol_init | n_labels |
|---|---|---|
| 2015-10-09 | BGS | 5 |
| 2020-08-03 | BGS | 4 |
| 2007-07-15 | SKS | 3 |
| 2003-05-01 | LRCP | 1 |
| 2016-11-12 | LAE | 1 |
These aren’t egregious numbers either. For kicks, let’s look through the top 3:
2015-10-09: (scene image)

2020-08-03: (scene image)

2007-07-15: (scene image)
Well, these all seem to have pretty high cloud contamination in the AOI. The first two images don’t seem to have much cirrus contamination, but the final one definitely does. Let’s see if there is anything happening at the pixel level that might help us weed out these outlier-type pixels or images.
Do any of the labels have QA pixel indications of cloud or cloud shadow? The first pass here is for all data that don’t have a label of “cloud” (not just outliers). Let’s see if the medium-confidence classification in the QA band is useful here; the sketch below shows how the bit positions are read.
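The chunks below read confidence bit pairs out of QA_PIXEL_binary with str_sub. This assumes QA_PIXEL_binary is the 16-bit, most-significant-bit-first binary string of QA_PIXEL, so that string positions 1–2 hold the cirrus confidence bits (15–14), 3–4 snow/ice (13–12), 5–6 cloud shadow (11–10), and 7–8 cloud confidence (9–8). A hypothetical helper (not the project’s actual code) that would produce such a string:

# hypothetical helper: render QA_PIXEL as a 16-character binary string,
# most significant bit first, so bits 15-14 land at string positions 1-2
qa_to_binary <- function(qa) {
  purrr::map_chr(qa, ~ paste(rev(as.integer(intToBits(.x))[1:16]), collapse = ""))
}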
LS7_for_class_analysis %>%
  # confidence bit pairs as strings: "10" = medium, "11" = high confidence
  mutate(QA = case_when(str_sub(QA_PIXEL_binary, 1, 2) %in% c("10", "11") ~ "cirrus",
                        str_sub(QA_PIXEL_binary, 3, 4) %in% c("10", "11") ~ "snow/ice",
                        str_sub(QA_PIXEL_binary, 5, 6) %in% c("10", "11") ~ "cloud shadow",
                        str_sub(QA_PIXEL_binary, 7, 8) %in% c("10", "11") ~ "cloud",
                        TRUE ~ "clear")) %>%
  group_by(QA) %>%
  filter(class != "cloud") %>%
  summarize(n_tot = n()) %>%
  kable()
| QA | n_tot |
|---|---|
| cirrus | 2 |
| clear | 69 |
| cloud | 1101 |
| snow/ice | 22 |
Well, that’s not helpful, considering that most of the label dataset carries at least a medium-confidence cloud flag in the pixel QA.
LS7_for_class_analysis %>%
  # high confidence only ("11")
  mutate(QA = case_when(str_sub(QA_PIXEL_binary, 1, 2) == "11" ~ "cirrus",
                        str_sub(QA_PIXEL_binary, 3, 4) == "11" ~ "snow/ice",
                        str_sub(QA_PIXEL_binary, 5, 6) == "11" ~ "cloud shadow",
                        str_sub(QA_PIXEL_binary, 7, 8) == "11" ~ "cloud",
                        TRUE ~ "clear")) %>%
  group_by(QA) %>%
  filter(class != "cloud") %>%
  summarize(n_tot = n()) %>%
  kable()
| QA | n_tot |
|---|---|
| cirrus | 1 |
| clear | 1181 |
| snow/ice | 12 |
Okay, if we use only “high confidence”, this is better. Let’s look at the labels where pixels were classified as snow/ice:
LS7_for_class_analysis %>%
  filter(str_sub(QA_PIXEL_binary, 3, 4) == "11") %>%
  group_by(date, vol_init) %>%
  summarise(n_out_snow_ice = n()) %>%
  arrange(-n_out_snow_ice) %>%
  kable()
| date | vol_init | n_out_snow_ice |
|---|---|---|
| 2020-08-03 | BGS | 4 |
| 2015-10-09 | BGS | 3 |
| 2005-09-27 | SKS | 2 |
| 2008-09-03 | FYC | 2 |
| 2015-07-05 | BGS | 1 |
| 2015-11-10 | FYC | 1 |
| 2020-09-04 | FYC | 1 |
Hmmm, I think there are some misclassifications here; none of these instances occur when I would expect snow or ice to be present.
What about the outliers?
outliers %>%
  mutate(QA = case_when(str_sub(QA_PIXEL_binary, 1, 2) == "11" ~ "cirrus",
                        str_sub(QA_PIXEL_binary, 3, 4) == "11" ~ "snow/ice",
                        str_sub(QA_PIXEL_binary, 5, 6) == "11" ~ "cloud shadow",
                        str_sub(QA_PIXEL_binary, 7, 8) == "11" ~ "cloud",
                        TRUE ~ "clear")) %>%
  group_by(QA) %>%
  filter(class != "cloud") %>%
  summarize(n_tot = n()) %>%
  kable()
| QA | n_tot |
|---|---|
| clear | 88 |
| snow/ice | 2 |
Almost all of these are designated clear pixels. I am not going to pursue this further.
How many of these outliers have near-pixel clouds (as measured by ST_CDIST)?
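A sketch of that comparison, assuming ST_CDIST has already been scaled to a distance in meters:

# non-cloud labels within 500 m of the nearest cloud, outliers vs. all labels
outliers %>%
  filter(class != "cloud", ST_CDIST < 500) %>%
  nrow()
LS7_for_class_analysis %>%
  filter(class != "cloud", ST_CDIST < 500) %>%
  nrow()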
There are 8 labels (8.9% of outliers) that aren’t “cloud” in the outlier dataset with a cloud distance <500 m, and 35 labels (2.1%) in the whole dataset with a cloud distance <500 m. Since these portions are not severely disproportionate, I don’t think this is terribly helpful.
How many of the outliers have high cloud cover, as reported by the scene-level metadata? Note that we don’t have the direct scene cloud cover associated with individual labels; rather, we have a list of the scene-level cloud cover values associated with the AOI.
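A sketch of those counts, assuming CLOUD_COVER is stored as a comma-separated string of the scene-level values (if it is a list-column instead, the str_split step drops out):

# max and mean scene cloud cover per label, then counts above each threshold
outliers %>%
  mutate(cc = map(str_split(CLOUD_COVER, ", "), as.numeric),
         cc_max = map_dbl(cc, max),
         cc_mean = map_dbl(cc, mean)) %>%
  summarise(n_max_gt_75 = sum(cc_max > 75),
            n_mean_gt_50 = sum(cc_mean > 50))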
The outlier dataset contains 7 labels (7.8%) where the max cloud cover was >75% and 16 (17.8%) where the mean cloud cover was >50%. The filtered dataset contains 76 (4.6%) where the max was >75% and 115 (7%) where the mean was >50%. While high CLOUD_COVER is more prevalent among the outliers, it is not a large enough portion of the outlier dataset to justify tossing scenes in either case above.
Pixels can also be saturated in one or more bands, so we need to make sure that QA_RADSAT for all labels (including clouds) is set to zero.
LS7_for_class_analysis %>%
  mutate(radsat = if_else(QA_RADSAT == 0,
                          "n",
                          "y")) %>%
  group_by(radsat) %>%
  summarize(n_tot = n()) %>%
  kable()
| radsat | n_tot |
|---|---|
| n | 1640 |
Great! No bands are saturated!
For the purposes of training data, I think we can throw out the data from the three scenes that had >20% outlier labels: 2007-07-15, 2015-07-05, and 2015-10-09.
LS7_training_labels <- LS7_for_class_analysis %>%
  filter(!(date %in% c("2007-07-15", "2015-07-05", "2015-10-09")))
We do want to have an idea of how different the classes are with regard to the band data. While there are a bunch of band interactions we could get into here, for the sake of this analysis we will analyze the class differences by band.
Kruskal-Wallis assumptions:

- samples are independent
- the response variable is ordinal or continuous
- the distributions across groups have a similar shape

ANOVA assumptions:

- samples are independent
- residuals are normally distributed
- variance is homogeneous across groups
We can’t entirely assert sample independence, and we know that the variance and distribution are different for “cloud” labels, but those data are also visibly different from the other classes.
In order to systematically test for differences between classes and be able to interpret the data, we need to know some things about our data, in particular whether each band is normally distributed within class and whether variances are similar across classes (per the assumptions above).

With this workflow, most classes are statistically different. Below are the cases where the pairwise comparisons were not deemed statistically significant:
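The testing code isn’t echoed here, but the output below matches the shape of rstatix::dunn_test results; a sketch under that assumption (the pivot on LS57_ee and the Bonferroni adjustment are guesses):

library(rstatix)

# Dunn's post-hoc test (Kruskal-Wallis family) per band across classes,
# keeping only the non-significant pairwise comparisons
LS7_training_labels %>%
  pivot_longer(all_of(LS57_ee), names_to = "band", values_to = "value") %>%
  group_by(band) %>%
  dunn_test(value ~ class, p.adjust.method = "bonferroni") %>%
  filter(p.adj.signif == "ns") %>%
  select(-.y.)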
## # A tibble: 5 × 9
## band group1 group2 n1 n2 statistic p p.adj p.adj.signif
## <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl> <chr>
## 1 SR_B1 darkNearShoreSedi… offSh… 98 318 1.12 0.264 1 ns
## 2 SR_B2 darkNearShoreSedi… offSh… 98 318 -0.852 0.394 1 ns
## 3 SR_B4 darkNearShoreSedi… light… 98 299 0.791 0.429 1 ns
## 4 SR_B5 darkNearShoreSedi… light… 98 299 -0.442 0.658 1 ns
## 5 SR_B7 darkNearShoreSedi… light… 98 299 -1.24 0.216 1 ns
There is some consistency here: “darkNearShoreSediment” is often not different from other sediment types by band. It is entirely possible that band interactions overpower these non-significant differences.
DNSS: dark near shore sediment, LNSS: light near shore sediment, OSS: offshore sediment
There are definitely some varying patterns here; let’s zoom in on the sediment classes.
DNSS: dark near shore sediment, LNSS: light near shore sediment, OSS: offshore sediment
Hmm, the subtle differences we saw in LS5 are not as evident here. The ranges definitely overlap, and DNSS may not be as easily differentiable from light/offshore sediment. Something to keep in mind.
Things to note for Landsat 7:

- the scenes from 2007-07-15, 2015-07-05, and 2015-10-09 were dropped from training for having >20% outlier labels
- medium-confidence QA_PIXEL flags are too liberal to be useful, and the high-confidence snow/ice flags appear to be misclassifications
- “darkNearShoreSediment” is often not statistically distinct from the other sediment classes on a per-band basis
write_rds(LS7_training_labels, paste0("data/labels/LS7_labels_for_tvt_", outlier_version, ".RDS"))